The structure of protein-protein interaction (PPI) networks has been studied for over a decade. Many theoretical models of PPI network structure have been proposed, but persistent noise and incompleteness in these networks make conclusions about their structure difficult. Using newer, larger networks from the Sept. 2018 BioGRID and Jan. 2019 IID releases, we show that the joint distribution of degree products and common neighbors has a greater impact on PPI edge connectivity than their individual distributions, and we introduce two new models (CN and STICKY-CN) for PPI networks that employ these features. Since graphlet-based measures are believed to be among the most discerning and sensitive network comparison tools available, we assess the overall global and local fits of the models to PPI networks using the Graphlet Kernel (GK). We fit 10 theoretical models to nine BioGRID networks and twelve Integrated Interactions Database (IID) networks and find that (1) STICKY and STICKY-CN are the globally best-fitting models overall according to GK, (2) the Hyperbolic Geometric Graph model is a better fit than any STICKY-based model on 4 species, and (3) although STICKY-CN provides a better local fit than the STICKY model, the CN model provides the best local fit for most species. We conclude that the inclusion of CN into STICKY-CN makes it the best overall fit for PPI networks, as it fits well both locally and globally.

Many of the known prognostic gene signatures for cancer are individual genes or combinations of genes found by the analysis of microarray data. However, many random gene expression signatures are more predictive than known cancer signatures, and the predictive power of random signatures is largely attributable to cell proliferation genes. With RNA-seq gene expression data now available for thousands of human cancer patients, we have analyzed RNA-seq and clinical data of cancer patients and constructed gene correlation networks specific to individual cancer patients.
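A minimal sketch of how a gene correlation network of this kind can be constructed, by thresholding pairwise Pearson correlations over expression values, is shown below. The gene names, toy data, and threshold are illustrative assumptions, not values from the paper:

```python
import numpy as np

def correlation_network(expr, genes, threshold=0.8):
    """Build a gene co-expression network by keeping gene pairs
    whose Pearson correlation satisfies |r| >= threshold.
    expr: (n_genes, n_samples) array of expression values."""
    r = np.corrcoef(expr)  # gene-by-gene Pearson correlation matrix
    edges = []
    n = len(genes)
    for i in range(n):
        for j in range(i + 1, n):
            if abs(r[i, j]) >= threshold:
                edges.append((genes[i], genes[j], r[i, j]))
    return edges

# Toy data: three hypothetical genes measured over five samples.
genes = ["G1", "G2", "G3"]
expr = np.array([
    [1.0, 2.0, 3.0, 4.0, 5.0],   # G1
    [2.1, 3.9, 6.2, 8.0, 9.9],   # G2: strongly correlated with G1
    [5.0, 1.0, 4.0, 2.0, 3.0],   # G3: weakly correlated with both
])
print(correlation_network(expr, genes, threshold=0.9))
```

With this toy input, only the (G1, G2) pair survives the 0.9 threshold. A patient-specific variant would recompute the network per patient; the exact construction used in the paper may differ.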
From the gene correlation networks, we derived potential prognostic gene pairs for liver cancer, pancreatic cancer, and stomach cancer. In this paper, we present a new approach to inferring prognostic signatures from patient-specific gene correlation networks. Evaluation of our approach with comprehensive data on liver cancer, pancreatic cancer, and stomach cancer showed that the approach is general and that the gene pairs it finds are more reliable prognostic signatures than individual genes. Our approach will be useful for constructing patient-specific gene correlation networks and for the prognosis of patients. A web server for dynamically constructing patient-specific gene networks and for finding prognostic gene pairs is accessible at http://bclab.inha.ac.kr/LPS.

Recent advances in next-generation sequencing technologies have led to the successful insertion of video information into DNA using synthesized oligonucleotides, and several attempts have been made to embed larger data into living organisms. This process of embedding messages is called steganography, and it is used for hiding and watermarking data to protect intellectual property. In contrast, steganalysis refers to algorithms that detect hidden information in covert media. Various methods have been developed to detect messages embedded in conventional covert channels, but these steganalysis algorithms are mostly limited to common covert media. Common detection approaches, such as frequency-analysis-based methods, often overlook important signals when applied directly to DNA steganography and are easily bypassed by recently developed steganography techniques. To address these limitations, a sequence-learning-based malicious DNA sequence analysis method built on neural networks has been proposed.
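For contrast with the learning-based approach, a conventional frequency-analysis-style check of the kind discussed above can be sketched as a k-mer profile comparison. The sequences, k value, and divergence measure here are illustrative assumptions, not part of the proposed method:

```python
from collections import Counter

def kmer_frequencies(seq, k=3):
    """Relative frequencies of all k-mers occurring in a DNA sequence."""
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = sum(counts.values())
    return {kmer: c / total for kmer, c in counts.items()}

def chi_square_distance(p, q):
    """Simple chi-square-style divergence between two k-mer profiles."""
    keys = set(p) | set(q)
    return sum((p.get(x, 0.0) - q.get(x, 0.0)) ** 2 /
               (p.get(x, 0.0) + q.get(x, 0.0) + 1e-12) for x in keys)

# A reference sequence vs. a suspect sequence (both hypothetical toy data).
reference = "ATGCGATACGCTTGCGATCGATGCATGCGTACGATCG"
suspect = "ATATATATATATATATATATATATATATATATATATA"
score = chi_square_distance(kmer_frequencies(reference),
                            kmer_frequencies(suspect))
print(round(score, 3))  # a large score flags an atypical k-mer profile
```

The limitation the abstract points out is visible here: any embedding scheme that preserves the host's k-mer statistics would keep this score low and evade detection, which motivates the sequence-learning approach.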
The proposed method learns intrinsic sequence distributions and identifies distribution variations, using a classification score to predict whether a sequence is a coding or non-coding sequence. Based on our experiments and results, we have developed a framework to safeguard against DNA steganography.

Viscous and gravitational flow instabilities cause a displacement front to break up into finger-like structures. The detection and evolutionary analysis of these fingering instabilities are critical in multiple scientific disciplines, such as fluid mechanics and hydrogeology. However, previous methods for detecting viscous and gravitational fingers are based on density thresholding, which provides limited geometric information about the fingers. The geometric structures of fingers and their evolution are important yet little studied in the literature. In this work, we explore the geometric detection and evolution of fingers in detail to elucidate the dynamics of the instability. We propose a ridge voxel detection method to guide the extraction of finger cores from three-dimensional (3D) scalar fields. After skeletonizing the finger cores, we design a spanning-tree-based approach to capture how fingers branch spatially from the finger skeletons. Finally, we devise a novel geometric-glyph-augmented tracking graph to study how the fingers and their branches grow, merge, and split over time. Feedback from earth scientists demonstrates the usefulness of our approach for performing spatio-temporal geometric analyses of fingers.

In this paper, we study two challenging and less-explored problems in single-image dehazing neural networks, namely, how to remove haze from a given image in an unsupervised and zero-shot manner. To this end, we propose a novel method (ZID) based on the idea of layer disentanglement, viewing a hazy image as the entanglement of several "simpler" layers, i.e., a haze-free image layer, a transmission map layer, and an atmospheric light layer.
The major advantages of the proposed ZID are twofold. First, it is an unsupervised method that does not use any clean images, including hazy-clean pairs, as ground truth. Second, ZID is a "zero-shot" method that uses only the observed single hazy image to perform learning and inference. In other words, it does not follow the conventional paradigm of training a deep model on a large-scale dataset. These two advantages enable our method to avoid labor-intensive data collection and the domain-shift issue that arises when synthetic hazy images are used to address real-world images. Extensive comparisons with 15 approaches show the promising performance of our method in both qualitative and quantitative evaluations.
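The three-layer decomposition described above is consistent with the standard atmospheric scattering model used widely in the dehazing literature, I = J·t + A·(1 − t), where J is the haze-free image, t the transmission map, and A the atmospheric light. A minimal sketch with toy values (an assumption for illustration, not the paper's implementation) is:

```python
import numpy as np

def compose_hazy(J, t, A):
    """Standard atmospheric scattering model: I = J * t + A * (1 - t).
    J: clean image (H, W, 3); t: transmission map (H, W, 1); A: airlight (3,)."""
    return J * t + A * (1.0 - t)

# Toy example: a dark scene under uniform haze.
J = np.full((4, 4, 3), 0.2)    # haze-free image layer
t = np.full((4, 4, 1), 0.5)    # transmission map layer
A = np.array([0.9, 0.9, 0.9])  # atmospheric light layer
I = compose_hazy(J, t, A)
print(I[0, 0])  # -> [0.55 0.55 0.55]
```

Dehazing inverts this composition: given only I, the method must disentangle J, t, and A, which is what makes the unsupervised, zero-shot setting hard.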